
    Track-based alignment using a Kalman filter technique

    An iterative method for track-based global alignment is proposed. It is derived from the Kalman filter and designed to avoid the inversion of large matrices. The update formulas for the alignment parameters and for the associated covariance matrix are described. The implementation and the computational complexity are discussed, and we show how to limit the latter to an acceptable level by restricting the update to detectors that are close in the sense of a certain metric. The performance of the Kalman filter in terms of precision and speed of convergence is studied with simulated tracks. Results from an implementation in the CMS reconstruction program CMSSW are presented, using two sections of the barrel part of the CMS Tracker.
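    The update formulas in question follow the standard Kalman-filter measurement update. As a minimal sketch (illustrative names only, not the CMSSW implementation), the key point is that only a small matrix, of the dimension of one track's residuals, is ever inverted:

```python
import numpy as np

def kalman_alignment_update(a, C, r, H, V):
    """One Kalman-filter update of alignment parameters.

    a : current alignment parameter estimate, shape (n,)
    C : its covariance matrix, shape (n, n)
    r : track-hit residuals for one track, shape (m,)
    H : Jacobian of the residuals w.r.t. the alignment parameters, (m, n)
    V : covariance of the residuals, (m, m)
    """
    S = V + H @ C @ H.T             # innovation covariance, only (m, m)
    K = C @ H.T @ np.linalg.inv(S)  # Kalman gain
    a_new = a + K @ r               # updated alignment parameters
    C_new = C - K @ H @ C           # updated covariance matrix
    return a_new, C_new
```

    Since m (residuals per track) is tiny compared to n (alignment parameters), the cost per update is dominated by the (n, n) covariance products, never by a large matrix inversion.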

    Application of the Kalman Alignment Algorithm to the CMS Tracker

    One of the main components of the CMS experiment is the Silicon Tracker. This device, designed to measure the trajectories of charged particles, is composed of approximately 16,000 planar silicon detector modules, which makes it the biggest of its kind. However, systematic measurement errors, caused by unavoidable inaccuracies in the construction and assembly phase, reduce the precision of the measurements significantly. The geometrical corrections therefore need to be known to an accuracy better than the intrinsic resolution of the detector modules. The Kalman Alignment Algorithm is a novel approach to extract a set of alignment constants from a large collection of recorded particle tracks, and is applicable even to a system as big as the CMS Tracker. To show that the method is functional and well understood, and thus suitable for the data-taking period of the CMS experiment, two case studies are presented and discussed here.

    A Large-scale Application of the Kalman Alignment Algorithm to the CMS Tracker

    The Kalman alignment algorithm has been specifically developed to cope with the demands that arise from the specifications of the CMS Tracker. The algorithmic concept is based on the Kalman filter formalism and is designed to avoid the inversion of large matrices. Most notably, the algorithm strikes a balance between conventional global and local track-based alignment algorithms, by restricting the computation of alignment parameters not only to alignable objects hit by the same track, but also to all other alignable objects that are significantly correlated. Nevertheless, this feature also comes with trade-offs: mechanisms are needed that determine which alignable objects are significantly correlated and that keep track of these correlations. Due to the large number of alignable objects involved in each update (at least compared to local alignment algorithms), the time spent retrieving and writing alignment parameters, as well as the required user memory, becomes a significant factor. The large-scale test presented here applies the Kalman alignment algorithm to the (misaligned) CMS Tracker barrel, and demonstrates the feasibility of the algorithm in a realistic scenario. It is shown that both the computation time and the amount of required user memory are within reasonable bounds, given the available computing resources, and that the obtained results are satisfactory.
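    The restriction to significantly correlated alignable objects can be sketched as a simple threshold on the current correlation matrix. This is an illustrative helper with hypothetical names, not the bookkeeping actually used in the CMS software:

```python
import numpy as np

def correlated_subset(C, hit, threshold=0.05):
    """Indices of alignable objects whose correlation with any object
    hit by the current track exceeds `threshold`; only these would be
    updated, keeping each Kalman step affordable."""
    sd = np.sqrt(np.diag(C))
    corr = C / np.outer(sd, sd)                       # correlation matrix
    mask = (np.abs(corr[:, hit]) > threshold).any(axis=1)
    mask[hit] = True                                  # hit objects always update
    return np.flatnonzero(mask)
```

    A larger threshold trades some precision for speed by shrinking the set of objects touched per track.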

    Representation and Estimation of Trajectories from Two-body Decays

    A novel parametrization of the trajectories stemming from two-body decays is presented, based on the kinematics of the decay. The core component of this parametrization is a decay model which is derived using the relativistic energy-momentum conservation law and geometrical fundamentals. The estimation of the decay parameters, also including the beam profile and a mass constraint, is described. Some applications in realistic scenarios are presented. In addition, the representation of the trajectories for use in track-based alignment algorithms is briefly discussed.
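    At the heart of any such decay model is relativistic energy-momentum conservation. As a minimal illustration (not the paper's parametrization), the mother's invariant mass follows from the daughter three-momenta via M^2 = (E1 + E2)^2 - |p1 + p2|^2:

```python
import math

def invariant_mass(p1, m1, p2, m2):
    """Invariant mass of a two-body decay from the daughter
    three-momenta and masses, using E^2 = |p|^2 + m^2
    (natural units, e.g. GeV)."""
    E1 = math.sqrt(sum(c * c for c in p1) + m1 * m1)
    E2 = math.sqrt(sum(c * c for c in p2) + m2 * m2)
    px, py, pz = (a + b for a, b in zip(p1, p2))
    E = E1 + E2
    return math.sqrt(E * E - (px * px + py * py + pz * pz))
```

    Constraining this quantity to a known resonance mass is what the abstract's mass constraint amounts to.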

    Reconstruction of electrons with the Gaussian-sum filter in the CMS tracker at LHC

    The bremsstrahlung energy loss distribution of electrons propagating in matter is highly non-Gaussian. Because the Kalman filter relies solely on Gaussian probability density functions, it might not be an optimal reconstruction algorithm for electron tracks. A Gaussian-sum filter (GSF) algorithm for electron track reconstruction in the CMS tracker has therefore been developed. The basic idea is to model the bremsstrahlung energy loss distribution by a Gaussian mixture rather than a single Gaussian. It is shown that the GSF is able to improve the momentum resolution of electrons compared to the standard Kalman filter. The momentum resolution and the quality of the estimated error are studied with various types of mixture models of the energy loss distribution. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003; LaTeX, 14 eps figures. PSN TULT00
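    The mixture idea can be sketched in one dimension: each Gaussian component of the energy-loss model turns a single Gaussian track state into several weighted ones, which may later be collapsed to a moment-matched single Gaussian. This is an illustrative toy with hypothetical names, not the CMS GSF code:

```python
def gsf_energy_loss_step(mean, var, mixture):
    """Propagate one Gaussian track state through a material layer whose
    energy loss is modelled as a Gaussian mixture.

    mixture : list of (weight, loss_mean, loss_var) components.
    Returns the resulting mixture of states and its collapsed
    (moment-matched single Gaussian) approximation.
    """
    states = [(w, mean + dm, var + dv) for (w, dm, dv) in mixture]
    # collapse: match the first two moments of the mixture
    m = sum(w * mu for w, mu, _ in states)
    v = sum(w * (s + (mu - m) ** 2) for w, mu, s in states)
    return states, (m, v)
```

    In a full GSF the number of components grows multiplicatively layer by layer, so some collapsing or pruning step like this is essential in practice.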

    Confluence Modulo Equivalence in Constraint Handling Rules

    Previous results on proving confluence for Constraint Handling Rules are extended in two ways in order to allow a larger and more realistic class of CHR programs to be considered confluent. Firstly, we introduce the relaxed notion of confluence modulo equivalence into the context of CHR: while confluence for a terminating program means that all alternative derivations for a query lead to the exact same final state, confluence modulo equivalence only requires the final states to be equivalent with respect to an equivalence relation tailored for the given program. Secondly, we allow non-logical built-in predicates such as var/1 and incomplete ones such as is/2, which are ignored in previous work on confluence. To this end, a new operational semantics for CHR is developed which includes such predicates. In addition, this semantics differs from earlier approaches by its simplicity without loss of generality, and it may also be recommended for future studies of CHR. For the purely logical subset of CHR, proofs can be expressed in first-order logic, which we show is not sufficient in the present case. We have introduced a formal meta-language that allows reasoning about abstract states and derivations with meta-level restrictions that reflect the non-logical and incomplete predicates. This language represents subproofs as diagrams, which facilitates a systematic enumeration of proof cases, pointing forward to mechanical support for such proofs.
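    The notion can be illustrated on a toy terminating rewrite system (plain Python, not CHR semantics): enumerate all maximal derivations from a query, then ask only that the resulting final states be pairwise equivalent rather than identical:

```python
def final_states(state, step):
    """All final states reachable from `state` under the
    nondeterministic successor function `step` (returns a list)."""
    succ = step(state)
    if not succ:
        return {state}          # no rule applies: a final state
    finals = set()
    for s in succ:
        finals |= final_states(s, step)
    return finals

def confluent_modulo(state, step, equiv):
    """A terminating system is confluent modulo `equiv` on `state`
    iff all reachable final states are pairwise equivalent."""
    finals = list(final_states(state, step))
    return all(equiv(a, b) for a in finals for b in finals)
```

    For example, a system that consumes a collection in any order and logs the consumed items is not confluent in the strict sense (the logs are different permutations) but is confluent modulo "same multiset".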

    An Adaptive Interacting Wang-Landau Algorithm for Automatic Density Exploration

    While statisticians are well-accustomed to performing exploratory analysis in the modeling stage of an analysis, the notion of conducting preliminary general-purpose exploratory analysis in the Monte Carlo stage (or more generally, the model-fitting stage) of an analysis is an area which we feel deserves much further attention. Towards this aim, this paper proposes a general-purpose algorithm for automatic density exploration. The proposed exploration algorithm combines and expands upon components from various adaptive Markov chain Monte Carlo methods, with the Wang-Landau algorithm at its heart. Additionally, the algorithm is run on interacting parallel chains -- a feature which both decreases computational cost and stabilizes the algorithm, improving its ability to explore the density. Performance is studied in several applications. Through a Bayesian variable selection example, the authors demonstrate the convergence gains obtained with interacting chains. The ability of the algorithm's adaptive proposal to induce mode-jumping is illustrated through a trimodal density and a Bayesian mixture modeling application. Lastly, through a 2D Ising model, the authors demonstrate the ability of the algorithm to overcome the high correlations encountered in spatial models. Comment: 33 pages, 20 figures (the supplementary materials are included as appendices).
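    The Wang-Landau core -- penalizing already-visited energy levels until the energy histogram is flat, then refining the modification factor -- can be sketched for a tiny discrete model. This is a single-chain toy with uniform proposals; the paper's interacting, adaptive version is considerably more elaborate:

```python
import math
import random

def wang_landau(configs, energy, max_steps=200_000, seed=1):
    """Wang-Landau sketch: estimate log g(E), the log density of states,
    for a finite configuration space by flattening the energy histogram."""
    rng = random.Random(seed)
    energies = sorted({energy(c) for c in configs})
    log_g = {E: 0.0 for E in energies}
    hist = {E: 0 for E in energies}
    log_f = 1.0                         # initial modification factor ln f
    x = configs[0]
    for _ in range(max_steps):
        if log_f < 1e-4:
            break
        y = rng.choice(configs)         # uniform (symmetric) proposal
        Ex, Ey = energy(x), energy(y)
        # accept with min(1, g(E_x)/g(E_y)): rarely seen energies win
        if math.log(rng.random() + 1e-300) < log_g[Ex] - log_g[Ey]:
            x, Ex = y, Ey
        log_g[Ex] += log_f
        hist[Ex] += 1
        if min(hist.values()) > 0.8 * sum(hist.values()) / len(hist):
            log_f /= 2.0                # histogram is flat: refine f
            hist = {E: 0 for E in hist}
    base = log_g[energies[0]]
    return {E: g - base for E, g in log_g.items()}
```

    On a 4-spin Ising ring the exact degeneracies are g(-4)=2, g(0)=12, g(4)=2, and the estimated log g ratios come out close to log 6 and 0 respectively.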

    Accelerating Bayesian hierarchical clustering of time series data with a randomised algorithm

    We live in an era of abundant data. This has necessitated the development of new and innovative statistical algorithms to get the most from experimental data. For example, faster algorithms make practical the analysis of larger genomic data sets, allowing us to extend the utility of cutting-edge statistical methods. We present a randomised algorithm that accelerates the clustering of time series data using the Bayesian Hierarchical Clustering (BHC) statistical method. BHC is a general method for clustering any discretely sampled time series data. In this paper we focus on a particular application to microarray gene expression data. We define and analyse the randomised algorithm, before presenting results on both synthetic and real biological data sets. We show that the randomised algorithm leads to substantial gains in speed with minimal loss in clustering quality. The randomised time series BHC algorithm is available as part of the R package BHC, available for download from Bioconductor (version 2.10 and above) via http://bioconductor.org/packages/2.10/bioc/html/BHC.html. We have also made available a set of R scripts which can be used to reproduce the analyses carried out in this paper; these are available from https://sites.google.com/site/randomisedbhc/
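    The randomisation idea -- scoring only a random subset of merge candidates per iteration instead of all pairs -- can be sketched with ordinary average-linkage agglomerative clustering standing in for the BHC merge score. Names here are hypothetical; the actual method uses Bayesian marginal likelihoods and ships in the BHC R package:

```python
import random

def greedy_cluster(points, dist, n_clusters, n_candidates=None, seed=0):
    """Agglomerative clustering that, in the spirit of the randomised
    BHC variant, scores only a random subset of merge candidates per
    iteration instead of all O(k^2) pairs."""
    rng = random.Random(seed)
    clusters = [[p] for p in points]

    def avg_link(i, j):
        # average pairwise distance between two clusters
        return (sum(dist(a, b) for a in clusters[i] for b in clusters[j])
                / (len(clusters[i]) * len(clusters[j])))

    while len(clusters) > n_clusters:
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        if n_candidates is not None and len(pairs) > n_candidates:
            pairs = rng.sample(pairs, n_candidates)   # randomised speed-up
        i, j = min(pairs, key=lambda ij: avg_link(*ij))
        clusters[i] += clusters.pop(j)
    return clusters
```

    With `n_candidates=None` the loop is the exact greedy algorithm; a small `n_candidates` trades some merge quality for fewer score evaluations per iteration, mirroring the speed/quality trade-off reported in the paper.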